Current state-of-the-art deep-learning-based face recognition (FR) models require a large number of core training identities. However, due to growing privacy awareness, access to the face images on users' devices for continually improving face recognition models is prohibited. Federated learning (FL) is a technique that addresses the privacy issue by collaboratively optimizing a model without sharing data between clients. In this work, we propose an FL-based framework, called FedFR, to improve the generic face representation in a privacy-aware manner. In addition, the framework jointly optimizes personalized models for the corresponding clients via the proposed Decoupled Feature Customization module. The client-specific personalized model can serve the need for an optimized face recognition experience for the identities registered on the local device. To the best of our knowledge, we are the first to explore personalized face recognition in the FL setup. The proposed framework is validated to be superior to previous approaches on several generic and personalized face recognition benchmarks under multiple FL scenarios. The source code and our proposed personalized FR benchmark under the FL setup are available at https://github.com/jackie840129/fedfr.
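The abstract does not describe the internals of the Decoupled Feature Customization module, so the sketch below only illustrates the general pattern the framework builds on, under assumed shapes and module names: a shared embedding backbone aggregated with federated averaging, plus a per-client personalized head that never leaves the device.

```python
# Minimal sketch (illustrative assumption, not the paper's implementation): FedAvg on a
# shared face-embedding backbone, while each client keeps a private "personalized" head
# that is trained locally and never aggregated.
import copy
import torch
import torch.nn as nn

def make_backbone():
    return nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

def local_update(backbone, head, data, labels, steps=5, lr=1e-2):
    model = nn.Sequential(backbone, head)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(data), labels)
        loss.backward()
        opt.step()
    return backbone.state_dict()

def fedavg(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

global_backbone = make_backbone()
clients = [
    {"head": nn.Linear(128, 10),                      # per-client registered identities
     "x": torch.randn(64, 512), "y": torch.randint(0, 10, (64,))}
    for _ in range(4)
]

for _round in range(3):                               # federated rounds
    local_states = []
    for c in clients:
        local_backbone = copy.deepcopy(global_backbone)
        local_states.append(local_update(local_backbone, c["head"], c["x"], c["y"]))
    global_backbone.load_state_dict(fedavg(local_states))   # only the shared part is averaged
```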
Non-blind deblurring is an ill-posed problem. Most existing methods typically formulate the problem within a maximum-a-posteriori (MAP) framework and solve it by designing regularization terms and data terms for the latent sharp images. In this paper, we propose an effective non-blind deblurring method that learns discriminative shrinkage functions to implicitly model these terms. In contrast to most existing methods that use deep convolutional neural networks (CNNs) or radial basis functions to simply learn the regularization term, we formulate both the data term and the regularization term and split the deconvolution model into data-related and regularization-related sub-problems according to the alternating direction method of multipliers (ADMM). We explore the properties of the Maxout function and develop a deep CNN model with Maxout layers to learn discriminative shrinkage functions that directly approximate the solutions of these two sub-problems. Moreover, given that fast-Fourier-transform-based image restoration usually leads to ringing artifacts while conjugate-gradient-based image restoration is time-consuming, we develop a Conjugate Gradient Network to restore the latent clear image effectively and efficiently. Experimental results show that the proposed method performs favorably against state-of-the-art methods in terms of efficiency and accuracy.
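As a reminder of the splitting the abstract refers to (with illustrative notation that is not necessarily the paper's), the MAP deconvolution objective and its ADMM decomposition into a data-related and a regularization-related sub-problem look as follows; a learned shrinkage function would play the role of the solver of the second sub-problem:

```latex
% y: blurred image, k: blur kernel, x: latent sharp image, R: regularizer (notation illustrative)
\min_{x} \; \tfrac{1}{2}\,\|y - k \ast x\|_2^2 + \lambda R(x)

% ADMM with auxiliary variable z = x and dual variable u gives, per iteration:
x^{t+1} = \arg\min_{x} \; \tfrac{1}{2}\,\|y - k \ast x\|_2^2
          + \tfrac{\rho}{2}\,\|x - z^{t} + u^{t}\|_2^2          % data-related (quadratic) sub-problem
z^{t+1} = \arg\min_{z} \; \lambda R(z)
          + \tfrac{\rho}{2}\,\|x^{t+1} - z + u^{t}\|_2^2        % regularization-related (shrinkage) sub-problem
u^{t+1} = u^{t} + x^{t+1} - z^{t+1}
```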
Self-supervised learning has recently shown great potential in vision tasks via contrastive learning, which aims to discriminate each image, or instance, in the dataset. However, such instance-level learning ignores the semantic relationships between instances and sometimes undesirably repels the anchor from semantically similar samples, termed "false negatives". In this work, we show that the unfavorable effect of false negatives is more significant for large-scale datasets with more semantic concepts. To address the issue, we propose a novel self-supervised contrastive learning framework that incrementally detects and explicitly removes false negative samples. Specifically, as training proceeds, the encoder gradually improves and the embedding space becomes more semantically structured, so our method dynamically detects an increasing number of high-quality false negatives. Next, we discuss two strategies for explicitly removing the detected false negatives during contrastive learning. Extensive experiments show that our framework outperforms other self-supervised contrastive learning methods on multiple benchmarks within a limited-resource setup.
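The abstract does not spell out the detection rule or the two removal strategies, so the following is only a minimal, assumption-laden sketch of the simplest variant: treat negatives whose similarity to the anchor exceeds a threshold as detected false negatives and drop them from the InfoNCE denominator.

```python
# Illustrative sketch, not the paper's code: InfoNCE in which "too similar" negatives
# are treated as detected false negatives and excluded. The threshold rule is an assumption.
import torch
import torch.nn.functional as F

def infonce_with_false_negative_removal(anchor, positive, negatives,
                                        temperature=0.2, fn_threshold=0.7):
    """anchor: (D,), positive: (D,), negatives: (N, D); all L2-normalized embeddings."""
    pos_logit = torch.dot(anchor, positive) / temperature
    neg_logits = (negatives @ anchor) / temperature

    # Detect candidate false negatives: cosine similarity with the anchor above a threshold.
    keep = (negatives @ anchor) < fn_threshold

    logits = torch.cat([pos_logit.unsqueeze(0), neg_logits[keep]])
    # The positive sits at index 0 of the remaining logits.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy usage with random normalized embeddings.
d, n = 128, 256
anchor = F.normalize(torch.randn(d), dim=0)
positive = F.normalize(anchor + 0.1 * torch.randn(d), dim=0)
negatives = F.normalize(torch.randn(n, d), dim=1)
loss = infonce_with_false_negative_removal(anchor, positive, negatives)
```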
This paper introduces a learned hierarchical B-frame coding scheme in response to the Grand Challenge on Neural Network-based Video Coding at ISCAS 2023. We specifically address three issues: (1) B-frame coding, (2) YUV 4:2:0 coding, and (3) content-adaptive variable-rate coding with a single model. Most learned video codecs operate internally in the RGB domain for P-frame coding. B-frame coding for YUV 4:2:0 content is largely under-explored. In addition, while there have been prior works on variable-rate coding with conditional convolution, most of them fail to consider the content information. We build our scheme on conditional augmented normalizing flows (CANF). It features conditional motion and inter-frame codecs for efficient B-frame coding. To cope with YUV 4:2:0 content, two conditional inter-frame codecs are used to process the Y and UV components separately, with the coding of the UV components conditioned additionally on the Y component. Moreover, we introduce adaptive feature modulation in every convolutional layer, taking into account both the content information and the coding levels of B-frames to achieve content-adaptive variable-rate coding. Experimental results show that our model outperforms x265 and the winner of last year's challenge on commonly used datasets in terms of PSNR-YUV.
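The exact modulation design is not given in the abstract; the sketch below shows one generic way a convolutional layer can be conditioned on a rate/level/content embedding through channel-wise affine modulation. Names and dimensions are assumptions, not the submission's code.

```python
# Illustrative sketch of adaptive feature modulation: a conv layer whose output features
# are scaled and shifted channel-wise by an embedding that could encode the rate point,
# the B-frame coding level, and a content descriptor.
import torch
import torch.nn as nn

class AdaptiveFeatureModulation(nn.Module):
    def __init__(self, in_ch, out_ch, cond_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Predict per-channel scale and shift from the conditioning vector.
        self.to_scale = nn.Linear(cond_dim, out_ch)
        self.to_shift = nn.Linear(cond_dim, out_ch)

    def forward(self, x, cond):
        h = self.conv(x)
        scale = self.to_scale(cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        shift = self.to_shift(cond).unsqueeze(-1).unsqueeze(-1)
        return h * (1 + scale) + shift

# Toy usage: a pooled content feature concatenated with rate/level embeddings as the condition.
feat = torch.randn(2, 64, 32, 32)
cond = torch.randn(2, 16)
layer = AdaptiveFeatureModulation(64, 64, 16)
out = layer(feat, cond)            # same spatial size, modulated channels
```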
Recently, e-scooter-involved crashes have increased significantly, but little information is available about the behaviors of on-road e-scooter riders. Most existing e-scooter crash research has been based on retrospectively descriptive media reports, emergency room patient records, and crash reports. This paper presents a naturalistic driving study with a focus on e-scooter and vehicle encounters. The goal is to quantitatively measure the behaviors of e-scooter riders in different encounters to help facilitate crash scenario modeling, baseline behavior modeling, and the potential future development of in-vehicle mitigation algorithms. The data was collected using an instrumented vehicle and an e-scooter rider wearable system, respectively. A three-step data analysis process is developed. First, semi-automatic data labeling extracts e-scooter rider images and non-rider human images in similar environments to train an e-scooter-rider classifier. Then, a multi-step scene reconstruction pipeline generates vehicle and e-scooter trajectories in all encounters. The final step is to model e-scooter rider behaviors and e-scooter-vehicle encounter scenarios. A total of 500 vehicle-to-e-scooter interactions are analyzed. The variables pertaining to these interactions are also discussed in this paper.
As one of the most popular micro-mobility options, e-scooters are spreading in hundreds of big cities and college towns in the US and worldwide. In the meantime, e-scooters are also posing new challenges to traffic safety. In general, e-scooters are suggested to be ridden in bike lanes/sidewalks or to share the road with cars at a maximum speed of about 15-20 mph, which is more flexible and much faster than pedestrians and bicyclists. These features make e-scooters challenging for human drivers, pedestrians, vehicle active safety modules, and self-driving modules to see and interact with. To study this new mobility option and address e-scooter riders' and other road users' safety concerns, this paper proposes a wearable data collection system for investigating micro-level e-scooter motion behavior in a naturalistic road environment. An e-scooter-based data acquisition system has been developed by integrating LiDAR, cameras, and GPS using the Robot Operating System (ROS). Software frameworks are developed to support hardware interfaces, sensor operation, sensor synchronization, and data saving. The integrated system can collect data continuously for hours, meeting all the requirements, including calibration accuracy and the capability of collecting vehicle and e-scooter encounter data.
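The abstract does not list topic names or message types, so the following rospy sketch is only an assumed configuration showing how camera, LiDAR, and GPS streams could be approximately time-synchronized and logged with standard ROS 1 tooling.

```python
# Illustrative ROS 1 sketch (topic names are assumptions): subscribe to camera, LiDAR,
# and GPS, approximately time-synchronize the streams, and write aligned messages to a
# rosbag for offline processing.
import rospy
import rosbag
import message_filters
from sensor_msgs.msg import Image, PointCloud2, NavSatFix

bag = rosbag.Bag("escooter_run.bag", "w")

def synced_callback(image_msg, cloud_msg, fix_msg):
    # Messages arrive approximately aligned in time; store them together.
    stamp = image_msg.header.stamp
    bag.write("/camera/image_raw", image_msg, stamp)
    bag.write("/lidar/points", cloud_msg, stamp)
    bag.write("/gps/fix", fix_msg, stamp)

def main():
    rospy.init_node("escooter_recorder")
    cam = message_filters.Subscriber("/camera/image_raw", Image)
    lidar = message_filters.Subscriber("/lidar/points", PointCloud2)
    gps = message_filters.Subscriber("/gps/fix", NavSatFix)
    sync = message_filters.ApproximateTimeSynchronizer([cam, lidar, gps],
                                                       queue_size=30, slop=0.05)
    sync.registerCallback(synced_callback)
    try:
        rospy.spin()
    finally:
        bag.close()

if __name__ == "__main__":
    main()
```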
In this paper, we propose SceNDD: a scenario-based naturalistic driving dataset that is built upon data collected from an instrumented vehicle in downtown Indianapolis. The data collection was completed in 68 driving sessions with different drivers, where each session lasted about 20--40 minutes. The main goal of creating this dataset is to provide the research community with real driving scenarios that have diverse trajectories and driving behaviors. The dataset contains ego-vehicle's waypoints, velocity, yaw angle, as well as non-ego actor's waypoints, velocity, yaw angle, entry-time, and exit-time. Certain flexibility is provided to users so that actors, sensors, lanes, roads, and obstacles can be added to the existing scenarios. We used a Joint Probabilistic Data Association (JPDA) tracker to detect non-ego vehicles on the road. We present some preliminary results of the proposed dataset and a few applications associated with it. The complete dataset is expected to be released by early 2023.
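The released data format is not specified in the abstract; as a purely hypothetical illustration, one scenario record carrying the listed quantities could be represented as follows.

```python
# Hypothetical sketch of a SceNDD-style scenario record; field names mirror the
# quantities listed in the abstract, not the actual released schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ActorTrack:
    waypoints: List[Tuple[float, float]]   # (x, y) positions
    velocity: List[float]                  # m/s, one value per waypoint
    yaw: List[float]                       # heading in radians, one value per waypoint
    entry_time: float                      # s, when the actor enters the scenario
    exit_time: float                       # s, when the actor leaves the scenario

@dataclass
class Scenario:
    ego: ActorTrack
    actors: List[ActorTrack] = field(default_factory=list)

# Toy example: an ego vehicle and one non-ego actor tracked over two timesteps.
ego = ActorTrack(waypoints=[(0.0, 0.0), (1.5, 0.0)], velocity=[10.0, 10.2],
                 yaw=[0.0, 0.01], entry_time=0.0, exit_time=2.0)
other = ActorTrack(waypoints=[(20.0, 3.5), (18.4, 3.5)], velocity=[8.0, 8.1],
                   yaw=[3.14, 3.14], entry_time=0.5, exit_time=2.0)
scene = Scenario(ego=ego, actors=[other])
```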
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
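Since the abstract states that the models are publicly released, a minimal usage sketch with the Hugging Face Transformers library is shown below; the smaller bigscience/bloom-560m checkpoint is used here as a stand-in because the full 176B model requires substantial multi-GPU or offloaded inference.

```python
# Minimal sketch of loading a released BLOOM checkpoint with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"     # swap for "bigscience/bloom" with sufficient hardware
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The ROOTS corpus covers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```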
As the demand for user privacy grows, controlled data removal (machine unlearning) is becoming an important feature of machine learning models for data-sensitive Web applications such as social networks and recommender systems. Nevertheless, at this point it is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs); this is especially the case when the number of training samples is small, in which case unlearning can seriously compromise the performance of the model. To address this issue, we initiate the study of unlearning the Graph Scattering Transform (GST), a mathematical framework that is efficient, provably stable under feature or graph topology perturbations, and offers graph classification performance comparable to that of GNNs. Our main contribution is the first known nonlinear approximate graph unlearning method based on GSTs. Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism, which is hard to replicate for deep neural networks. Our third contribution is extensive simulation results which show that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a $10.38$x speed-up and leads to a $2.6$% increase in test accuracy during unlearning of $90$ out of $100$ training graphs from the IMDB dataset ($10$% training ratio).
Lung airway tree modeling is essential for diagnosing lung diseases, especially with X-ray computed tomography (CT). Airway tree modeling on CT images can provide experts with 3-dimensional measurements such as wall thickness. This information can greatly assist in the diagnosis of lung diseases such as chronic obstructive pulmonary disease [1-4]. Many researchers have tried various approaches to model the lung airway tree; these can be divided into two major categories according to their nature, namely model-based methods and deep-learning methods. The performance of typical model-based methods usually depends on manual tuning of model parameters, which can be both an advantage and a disadvantage. The advantage is that they do not require large amounts of training data, which can be beneficial for small datasets such as those common in medical imaging. On the other hand, model-based performance can be error-prone [5,6]. In recent years, deep learning has achieved good results in the field of medical image processing, and many researchers have used UNet-based methods for medical image segmentation [7-11]. Among all the variants of UNet, UNet 3+ [11] produces relatively good results compared with the rest. Therefore, to further improve the accuracy of lung airway modeling, this study combines the Frangi filter [5] with UNet 3+ [11] to develop a dual-channel 3D UNet 3+. The Frangi filter is used to extract vessel-like features, which are then used as an input to guide the training and testing procedures of the dual-channel UNet 3+.
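The paper's dual-channel construction is not detailed in the abstract; the sketch below only illustrates the assumed idea of computing a Frangi vesselness volume with scikit-image and stacking it with the CT intensities as a two-channel network input. Shapes and normalization choices are illustrative assumptions.

```python
# Illustrative sketch, not the paper's pipeline: build a 2-channel (intensity + Frangi
# vesselness) volume that a dual-channel 3D segmentation network could consume.
import numpy as np
from skimage.filters import frangi

def make_dual_channel_input(ct_volume):
    """ct_volume: 3-D array (D, H, W) of Hounsfield units or normalized intensities."""
    intensity = (ct_volume - ct_volume.mean()) / (ct_volume.std() + 1e-8)
    vesselness = frangi(ct_volume)                 # tube-like (vessel/airway) response
    vesselness = vesselness / (vesselness.max() + 1e-8)
    # Channel-first layout: (2, D, H, W), ready to feed a 3D UNet-style network.
    return np.stack([intensity, vesselness], axis=0)

# Toy usage on a random volume standing in for a CT scan.
volume = np.random.rand(32, 64, 64).astype(np.float32)
dual = make_dual_channel_input(volume)
print(dual.shape)   # (2, 32, 64, 64)
```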